Musk, Experts Urge AI Pause, Citing 'Risks to Society'

2023-03-31

Hundreds of artificial intelligence experts and industry leaders are urging a pause in the development of some AI technology. They say the most powerful AI systems could present extreme risks to humanity and social order.
The group released an open letter about the issue this week. It referenced the recent release of the fourth version of the popular AI program ChatGPT.
"We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than" ChatGPT-4, the letter says.
The product comes from Microsoft-backed developer OpenAI. It can hold human-like conversations and perform creative tasks.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter continues.
The non-profit group Future of Life Institute released the letter, signed by about a thousand AI scientists, experts and industry members, including Elon Musk.
The Musk Foundation is the main financial backer of Future of Life. It also receives money from the London-based group Founders Pledge and the Silicon Valley Community Foundation.
Elon Musk is one of the co-founders of OpenAI. His electric car company, Tesla, uses AI in models with self-driving systems.
Musk has been critical of efforts to regulate Tesla's self-driving system. But now he hopes an agency will be created to make sure the development of AI serves the public.
"It is ... deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against" AI regulation in his self-driving cars, said James Grimmelmann. He is a professor of digital and information law at Cornell University.
Last month, Tesla had to recall more than 362,000 of its vehicles in the United States. The company had to update software after U.S. regulators said the driver assistance system could cause crashes. At the time, Musk tweeted that using the word "recall" for a software update is "just flat wrong!"
However, Grimmelmann does not oppose the idea of a temporary pause. "A pause is a good idea," he said, "but the letter is vague and doesn't take the regulatory problems seriously."
The letter suggests shared safety measures could be developed during the proposed suspension. It also calls on developers to work with policymakers on governance.
The letter noted danger linked especially to "human-competitive intelligence." The writers ask, "Should we develop nonhuman minds that might eventually outnumber, outsmart...and replace us?" They also say that such decisions should not be made by "unelected tech leaders."
Yoshua Bengio, often described as one of the "godfathers of AI," was also a signer. Stuart Russell, a lead researcher in the field, put his name on the letter as well. Business leaders who signed include Stability AI CEO Emad Mostaque.
The concerns come as U.S. lawmakers begin to question ChatGPT's effect on national security and education. The European Union police force warned recently about the possible misuse of the system in phishing attempts, disinformation and crime.
Gary Marcus is a professor at New York University who signed the letter. He said development should slow until more is learned. "The letter isn't perfect, but the spirit is right: we need to slow down until we better understand" the technology, he said.
Since its release last year, ChatGPT has led other companies like Google to create similar AI systems.
Suresh Venkatasubramanian is a professor at Brown University and a former assistant director in the White House Office of Science and Technology Policy. He said that much of the power to create these systems is in the hands of a few large companies. "That's how these models are, they're hard to build and they're hard to democratize."
Dan Novak adapted this story for VOA Learning English based on reporting by Reuters.
________________________________________________________________
Words in This Story
  • confident - adj. having a feeling or belief that you can do something well or succeed at something
  • positive - adj. good or useful
  • regulate - v. to make rules or laws that control something
  • hypocritical - adj. claiming to have certain beliefs about what is right while behaving in a way that goes against those beliefs
  • vague - adj. not clear in meaning
  • recall - v. to ask owners to return a product so that a problem with it can be fixed
  • phishing - n. the practice of sending emails pretending to be from real companies in order to get individuals to reveal personal information